Synchronization Examples
Classic Problems of Synchronization
The Bounded-Buffer Problem (the producer–consumer problem). The n-slot buffer is protected by three semaphores: mutex (initialized to 1), empty (initialized to n), and full (initialized to 0).
The structure of the producer process:

    while (true) {
        /* produce an item in next_produced */

        wait(empty);
        wait(mutex);

        /* add next_produced to the buffer */

        signal(mutex);
        signal(full);
    }
The structure of the consumer process:

    while (true) {
        wait(full);
        wait(mutex);

        /* remove an item from the buffer into next_consumed */

        signal(mutex);
        signal(empty);

        /* consume the item in next_consumed */
    }
The Readers–Writers Problem
The first readers–writers problem requires that no reader be kept waiting unless a writer has already obtained permission to use the shared object. The structure of a reader process:
    while (true) {
        wait(mutex);
        read_count++;
        if (read_count == 1)
            wait(rw_mutex);      /* the first reader locks out writers */
        signal(mutex);

        /* reading is performed */

        wait(mutex);
        read_count--;
        if (read_count == 0)
            signal(rw_mutex);    /* the last reader readmits writers */
        signal(mutex);
    }

The structure of a writer process:

    while (true) {
        wait(rw_mutex);

        /* writing is performed */

        signal(rw_mutex);
    }

The second readers–writers problem requires that once a writer is ready, it perform its write as soon as possible.
A solution to either problem may result in starvation: in the first problem, writers may starve; in the second, readers may starve.
The readers–writers problem and its solutions have been generalized to provide reader–writer locks on some systems. Acquiring a reader–writer lock requires specifying the mode of the lock: either read or write access (a usage sketch follows the list below).
Reader–writer locks are most useful in the following situations:
- In applications where it is easy to identify which processes only read shared data and which processes only write shared data.
- In applications that have more readers than writers.
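
For example, Pthreads exposes reader–writer locks through the pthread_rwlock_t type. A minimal usage sketch (error checking omitted):

    #include <pthread.h>

    pthread_rwlock_t rw_lock = PTHREAD_RWLOCK_INITIALIZER;
    int shared_data = 0;

    void reader(void) {
        pthread_rwlock_rdlock(&rw_lock);   /* shared (read) mode: many readers may hold the lock */
        /* read shared_data */
        pthread_rwlock_unlock(&rw_lock);
    }

    void writer(int value) {
        pthread_rwlock_wrlock(&rw_lock);   /* exclusive (write) mode */
        shared_data = value;
        pthread_rwlock_unlock(&rw_lock);
    }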
The Dining-Philosophers Problem
Semaphore Solution
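In this solution, each chopstick is represented by a semaphore in the array chopstick[5], all initialized to 1. A sketch of the structure of philosopher i:

    while (true) {
        wait(chopstick[i]);             /* pick up left chopstick  */
        wait(chopstick[(i + 1) % 5]);   /* pick up right chopstick */

        /* eat for a while */

        signal(chopstick[i]);           /* put down left chopstick  */
        signal(chopstick[(i + 1) % 5]); /* put down right chopstick */

        /* think for a while */
    }

If all five philosophers become hungry at the same time and each picks up her left chopstick first, every philosopher waits forever for her right chopstick, producing deadlock.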
- Several possible remedies to the deadlock problem are the following:
- Allow at most four philosophers to be sitting simultaneously at the table
- Allow a philosopher to pick up her chopsticks only if both chopsticks are available (to do this, she must pick them up in a critical section).
- Use an asymmetric solution—that is, an odd-numbered philosopher picks up first her left chopstick and then her right chopstick, whereas an even numbered philosopher picks up her right chopstick and then her left chopstick.
- Note, however, that any satisfactory solution to the dining-philosophers problem must guard against the possibility that one of the philosophers will starve to death
Monitor Solution
Synchronization within the Kernel
Synchronization in Windows
- On a multiprocessor system, Windows protects access to global resources using spinlocks, although the kernel uses spinlocks only to protect short code segments.
- For thread synchronization outside the kernel, Windows provides dispatcher objects (a small Win32 sketch follows this list).
- Using a dispatcher object, threads synchronize according to several different mechanisms, including mutex locks, semaphores, events, and timers.
- Events are similar to condition variables; that is, they may notify a waiting thread when a desired condition occurs.
- Timers are used to notify one (or more than one) thread that a specified amount of time has expired
- Dispatcher objects may be in either a signaled state or a non-signaled state.
- An object in a signaled state is available, and a thread will not block when acquiring the object.
- An object in a non-signaled state is not available, and a thread will block when attempting to acquire the object.
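
A minimal sketch using the Win32 API (a mutex dispatcher object is signaled when unowned; WaitForSingleObject() blocks until the object becomes signaled):

    #include <windows.h>

    HANDLE mutex;

    void init(void) {
        /* create an unowned mutex: it starts in the signaled state */
        mutex = CreateMutex(NULL, FALSE, NULL);
    }

    void worker(void) {
        /* block until the mutex is signaled, then take ownership
           (the object becomes non-signaled) */
        WaitForSingleObject(mutex, INFINITE);

        /* critical section */

        /* release ownership: the mutex returns to the signaled state */
        ReleaseMutex(mutex);
    }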
Synchronization in Linux
- The simplest synchronization technique within the Linux kernel is an atomic integer, which is represented using the opaque data type atomic_t (see the sketch after this list).
- Mutex locks are available in Linux for protecting critical sections within the kernel.
- Linux also provides spinlocks and semaphores (as well as reader–writer versions of these two locks) for locking in the kernel.
- On SMP machines, the fundamental locking mechanism is a spinlock, and the kernel is designed so that the spinlock is held only for short durations.
| Single Processor | Multiple Processors |
|---|---|
| Disable kernel preemption | Acquire spin lock |
| Enable kernel preemption | Release spin lock |
- In the Linux kernel, both spinlocks and mutex locks are nonrecursive, which means that if a thread has acquired one of these locks, it cannot acquire the same lock a second time without first releasing it.
- When a lock must be held for a longer period, semaphores or mutex locks are appropriate for use.
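
A sketch of the atomic_t operations referred to above (these functions belong to the kernel's internal API, so the code is illustrative rather than a user-space program; the commented values assume the sequence shown):

    atomic_t counter;
    int value;

    atomic_set(&counter, 5);        /* counter = 5  */
    atomic_add(10, &counter);       /* counter = 15 */
    atomic_sub(4, &counter);        /* counter = 11 */
    atomic_inc(&counter);           /* counter = 12 */
    value = atomic_read(&counter);  /* value = 12   */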
POSIX Synchronization
- The synchronization methods discussed in the preceding section pertain to synchronization within the kernel and are therefore available only to kernel developers. In contrast, the POSIX API is available for programmers at the user level and is not part of any particular operating-system kernel.
POSIX Mutex Locks
    #include <pthread.h>

    pthread_mutex_t mutex;

    /* create and initialize the mutex lock */
    pthread_mutex_init(&mutex, NULL);

    /* acquire the mutex lock */
    pthread_mutex_lock(&mutex);

    /* critical section */

    /* release the mutex lock */
    pthread_mutex_unlock(&mutex);
All mutex functions return a value of 0 with correct operation; if an error occurs, these functions return a nonzero error code.
POSIX Semaphores
- POSIX specifies two types of semaphores—named and unnamed.
POSIX Named Semaphores
    #include <semaphore.h>

    sem_t *sem;

    /* create the semaphore and initialize it to 1 */
    sem = sem_open("SEM", O_CREAT, 0666, 1);
- In this instance, we are naming the semaphore SEM. The O_CREAT flag indicates that the semaphore will be created if it does not already exist. Additionally, the semaphore has read and write access for other processes (via the parameter 0666) and is initialized to 1.
- Both Linux and macOS systems provide POSIX named semaphores.
- The advantage of named semaphores is that multiple unrelated processes can easily use a common semaphore as a synchronization mechanism by simply referring to the semaphore’s name
POSIX Unnamed Semaphores
    #include <semaphore.h>

    sem_t sem;

    /* create the semaphore and initialize it to 1 */
    sem_init(&sem, 0, 1);
- In this example, the second parameter (the flag 0) indicates the level of sharing: this semaphore can be shared only by threads belonging to the process that created it. (If we supplied a nonzero value, we could allow the semaphore to be shared between separate processes by placing it in a region of shared memory.) In addition, we initialize the semaphore to the value 1.
- Just like mutex locks, all semaphore functions return 0 when successful and nonzero when an error condition occurs
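
Whether named or unnamed, a semaphore is acquired with sem_wait() and released with sem_post(). A usage sketch for the named semaphore sem created above:

    /* acquire the semaphore */
    sem_wait(sem);

    /* critical section */

    /* release the semaphore */
    sem_post(sem);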
POSIX Condition Variables
- Condition variables are used within the context of a monitor, which provides a locking mechanism to ensure data integrity.
- Since Pthreads is typically used in C programs—and since C does not have a monitor— we accomplish locking by associating a condition variable with a mutex lock.
    pthread_mutex_t mutex;
    pthread_cond_t cond_var;

    pthread_mutex_init(&mutex, NULL);
    pthread_cond_init(&cond_var, NULL);
The following code illustrates how a thread can wait for the condition a == b to become true using a Pthread condition variable
    pthread_mutex_lock(&mutex);

    while (a != b)
        pthread_cond_wait(&cond_var, &mutex);

    pthread_mutex_unlock(&mutex);
- The mutex lock associated with the condition variable must be locked before the pthread_cond_wait() function is called
- Calling pthread_cond_wait() releases the mutex lock, thereby allowing another thread to access the shared data and possibly update its value so that the condition clause evaluates to true.
- It is important to place the conditional clause within a loop so that the condition is rechecked after the thread is signaled.
- It is important to note that the call to pthread_cond_signal() does not release the mutex lock. The typical signaling sequence is:
    pthread_mutex_lock(&mutex);
    a = b;
    pthread_cond_signal(&cond_var);
    pthread_mutex_unlock(&mutex);
Synchronization in Java
Java Monitors
- Every object in Java has associated with it a single lock. When a method is declared to be synchronized, calling the method requires owning the lock for the object. We declare a synchronized method by placing the synchronized keyword in the method definition, such as with the insert() and remove() methods in the BoundedBuffer class.
- Invoking a synchronized method requires owning the lock on an object instance of BoundedBuffer.
- If the lock is already owned by another thread, the thread calling the synchronized method blocks and is placed in the entry set for the object’s lock.
- The entry set represents the set of threads waiting for the lock to become available.
- If the entry set for the lock is not empty when the lock is released, the JVM arbitrarily selects a thread from this set to be the owner of the lock
- In addition to having a lock, every object also has associated with it a wait set consisting of a set of threads. This wait set is initially empty. When a thread enters a synchronized method, it owns the lock for the object. However, this thread may determine that it is unable to continue because a certain condition has not been met.
- When a thread calls the wait() method, the following happens:
- The thread releases the lock for the object
- The state of the thread is set to blocked
- The thread is placed in the wait set for the object
- At the end of the insert() and remove() methods, we have a call to the method notify(). The call to notify():
- Picks an arbitrary thread T from the list of threads in the wait set
- Moves T from the wait set to the entry set
- Sets the state of T from blocked to runnable
- The insert() and remove() methods using wait() and notify() are shown in Figure 7.11.
Reentrant Locks
- Perhaps the simplest locking mechanism available in the API is the ReentrantLock.
- A thread acquires a ReentrantLock lock by invoking its lock() method.
- If the lock is available—or if the thread invoking lock() already owns it, which is why it is termed reentrant—lock() assigns the invoking thread lock ownership and returns control.
Semaphores
- The Java API also provides a counting semaphore
Condition Variables
Alternative Approaches
Transactional Memory
- The concept of transactional memory originated in database theory, yet it provides a strategy for process synchronization. A memory transaction is a sequence of memory read–write operations that are atomic. If all operations in a transaction are completed, the memory transaction is committed. Otherwise, the operations must be aborted and rolled back.
- The advantage of using such a mechanism rather than locks is that the transactional memory system—not the developer—is responsible for guaranteeing atomicity. Additionally, because no locks are involved, deadlock is not possible
- Software transactional memory (STM), as the name suggests, implements transactional memory exclusively in software—no special hardware is needed. STM works by inserting instrumentation code inside transaction blocks; the code is inserted by a compiler (see the sketch after this list).
- Hardware transactional memory (HTM) uses hardware cache hierarchies and cache coherency protocols to manage and resolve conflicts involving shared data residing in separate processors’ caches
- HTM requires no special code instrumentation and thus has less overhead than STM
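
As one concrete illustration (assuming GCC's experimental transactional-memory extension, enabled with -fgnu-tm), a transaction block can be written as follows; the compiler inserts the instrumentation:

    /* compile with: gcc -fgnu-tm tm_example.c   (illustrative sketch) */
    static int account_a = 100;
    static int account_b = 0;

    void transfer(int amount) {
        /* either both updates commit atomically,
           or the transaction aborts and is rolled back */
        __transaction_atomic {
            account_a -= amount;
            account_b += amount;
        }
    }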
OpenMP
- The advantage of OpenMP (and similar tools) is that thread creation and management are handled by the OpenMP library and are not the responsibility of application developers.
- An advantage of using the critical-section compiler directive in OpenMP is that it is generally considered easier to use than standard mutex locks (see the sketch after this list).
- Because the critical-section compiler directive behaves much like a mutex lock, deadlock is still possible when two or more critical sections are identified.
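
A minimal sketch of the critical-section directive (assuming an OpenMP-capable compiler, e.g., gcc -fopenmp):

    #include <omp.h>
    #include <stdio.h>

    int counter = 0;

    int main(void) {
        #pragma omp parallel
        {
            /* only one thread at a time may execute this block */
            #pragma omp critical
            {
                counter++;
            }
        }
        printf("counter = %d\n", counter);
        return 0;
    }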
Functional Programming Languages
- Most well-known programming languages—such as C, C++, Java, and C#—are known as imperative (or procedural) languages
- The fundamental difference between imperative and functional languages is that functional languages do not maintain state.
- That is, once a variable has been defined and assigned a value, its value is immutable—it cannot change. Because functional languages disallow mutable state, they need not be concerned with issues such as race conditions and deadlocks.
- Several functional languages are presently in use, and we briefly mention two of them here: Erlang and Scala